
    Revisiting the Core Ontology and Problem in Requirements Engineering

    In their seminal paper in the ACM Transactions on Software Engineering and Methodology, Zave and Jackson established a core ontology for Requirements Engineering (RE) and used it to formulate the "requirements problem", thereby defining what it means to successfully complete RE. Given that stakeholders of the system-to-be communicate the information needed to perform RE, we show that Zave and Jackson's ontology is incomplete: it does not cover all types of basic concerns that stakeholders communicate, including beliefs, desires, intentions, and attitudes. In response, we propose a core ontology that covers these concerns and is grounded in sound conceptual foundations resting on a foundational ontology. The new core ontology for RE leads to a new formulation of the requirements problem that extends Zave and Jackson's formulation. We thereby establish new standards for what minimum information should be represented in RE languages and new criteria for determining whether RE has been successfully completed. Comment: Appears in the proceedings of the 16th IEEE International Requirements Engineering Conference (RE'08), 2008. Best paper award.
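
    The extended ontology is only described informally above; as a rough illustration of the kinds of stakeholder concerns it is meant to capture, here is a toy Python sketch. The class names and fields are our own placeholders, not the paper's formal concepts.

        # Illustrative only: stakeholder concerns tagged by the kinds named in the
        # abstract (beliefs, desires, intentions, attitudes). Not the paper's ontology.
        from dataclasses import dataclass
        from enum import Enum, auto

        class ConcernKind(Enum):
            BELIEF = auto()     # what a stakeholder holds to be true of the domain
            DESIRE = auto()     # a state of affairs the stakeholder would like to hold
            INTENTION = auto()  # a desire the stakeholder commits to act on
            ATTITUDE = auto()   # an evaluation or preference over alternatives

        @dataclass
        class Concern:
            stakeholder: str
            kind: ConcernKind
            statement: str

        elicited = [
            Concern("operator", ConcernKind.BELIEF, "sensor readings arrive within 2 seconds"),
            Concern("operator", ConcernKind.DESIRE, "no alarm is ever missed"),
        ]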

    Nash bargaining in ordinal environments

    We analyze the implications of Nash's (1950) axioms in ordinal bargaining environments; there, the scale invariance axiom needs to be strengthened to take into account all order-preserving transformations of the agents' utilities. This axiom, called ordinal invariance, is a very demanding one: for two agents, it is violated by every strongly individually rational bargaining rule. In general, no ordinally invariant bargaining rule satisfies the other three axioms of Nash. Parallel to Roth (1977), we introduce a weaker independence of irrelevant alternatives axiom that we argue is better suited for ordinally invariant bargaining rules. We show that the three-agent Shapley-Shubik bargaining rule uniquely satisfies ordinal invariance, Pareto optimality, symmetry, and this weaker independence of irrelevant alternatives axiom. We also analyze the implications of other independence axioms.
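
    For readers unfamiliar with the terminology, the contrast between the two invariance requirements can be stated as follows; the notation (bargaining rule F, problem (S, d), transformations phi acting coordinatewise) is ours, a standard textbook formulation rather than something taken from the paper.

        % Scale invariance (Nash): covariance under positive affine rescalings of utilities.
        F(\phi(S), \phi(d)) = \phi(F(S, d))
            \quad \text{for all } \phi_i(u) = a_i u + b_i,\ a_i > 0.

        % Ordinal invariance: the same covariance under every order-preserving transformation.
        F(\phi(S), \phi(d)) = \phi(F(S, d))
            \quad \text{for all strictly increasing } \phi_i.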

    An efficient algorithm for learning with semi-bandit feedback

    We consider the problem of online combinatorial optimization under semi-bandit feedback. The goal of the learner is to sequentially select its actions from a combinatorial decision set so as to minimize its cumulative loss. We propose a learning algorithm for this problem based on combining the Follow-the-Perturbed-Leader (FPL) prediction method with a novel loss estimation procedure called Geometric Resampling (GR). Contrary to previous solutions, the resulting algorithm can be efficiently implemented for any decision set where efficient offline combinatorial optimization is possible at all. Assuming that the elements of the decision set can be described by d-dimensional binary vectors with at most m non-zero entries, we show that the expected regret of our algorithm after T rounds is O(m sqrt(dT log d)). As a side result, we also improve the best known regret bounds for FPL in the full-information setting to O(m^(3/2) sqrt(T log d)), gaining a factor of sqrt(d/m) over previous bounds for this algorithm. Comment: submitted to ALT 201
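
    As a concrete illustration of how FPL and Geometric Resampling fit together, here is a minimal Python sketch on the simple decision set "pick exactly m of d components", where the offline oracle is just a top-m selection. The perturbation distribution, the tuning of eta, and the resampling cap M are illustrative assumptions, not the parameters from the paper.

        # Follow-the-Perturbed-Leader with Geometric Resampling, semi-bandit feedback.
        # Toy decision set: all binary vectors with exactly m ones (oracle = top-m).
        import numpy as np

        rng = np.random.default_rng(0)
        d, m, T = 10, 3, 500
        eta = np.sqrt(np.log(d) / (m * d * T))   # assumed tuning, not the paper's constant
        M = int(np.ceil(1.0 / eta))              # cap on resampling effort per component

        def oracle(scores):
            """Offline optimizer: the action minimizing <scores, v> over the decision set."""
            v = np.zeros(d)
            v[np.argsort(scores)[:m]] = 1.0
            return v

        loss_means = rng.uniform(0.2, 0.8, size=d)   # toy stochastic environment
        L_hat = np.zeros(d)                          # cumulative loss estimates

        for t in range(T):
            Z = rng.exponential(size=d)
            action = oracle(eta * L_hat - Z)         # FPL: follow the perturbed leader
            losses = (rng.random(d) < loss_means).astype(float)
            # Semi-bandit feedback: only losses[i] for components i in the played action
            # are observed. Geometric Resampling: redraw perturbations until component i
            # is played again; the (capped) waiting time K estimates 1/p_{t,i}.
            for i in np.flatnonzero(action):
                K = M
                for k in range(1, M + 1):
                    if oracle(eta * L_hat - rng.exponential(size=d))[i] == 1.0:
                        K = k
                        break
                L_hat[i] += K * losses[i]            # importance-weighted loss estimate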

    The Least-core and Nucleolus of Path Cooperative Games

    Cooperative games provide an appropriate framework for fair and stable profit distribution in multiagent systems. In this paper, we study algorithmic issues in path cooperative games, which arise from situations where some commodity flows through a network. In these games, a coalition of edges or vertices is successful if it enables a path from the source to the sink in the network, and loses otherwise. Based on the duality theory of linear programming and the relationship with flow games, we provide characterizations of the CS-core, least-core and nucleolus of path cooperative games. Furthermore, we show that the least-core and nucleolus are polynomially computable for path cooperative games defined on both directed and undirected networks.
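
    To make the solution concept concrete, the sketch below computes the least-core of a tiny edge path game by brute-force coalition enumeration and a single LP via scipy. The example network and the enumeration are purely illustrative; the paper's point is that for path games these quantities can be obtained in polynomial time without enumerating coalitions.

        # Least-core of a toy *edge* path game: a coalition of edges has value 1 iff
        # its edges contain a directed s-t path, and value 0 otherwise.
        from itertools import combinations
        from scipy.optimize import linprog

        edges = [("s", "a"), ("a", "t"), ("s", "t")]   # players are the three edges
        n = len(edges)

        def value(coalition):
            """1.0 if the coalition's edges contain an s-t path (simple DFS), else 0.0."""
            adj = {}
            for i in coalition:
                u, v = edges[i]
                adj.setdefault(u, []).append(v)
            stack, seen = ["s"], {"s"}
            while stack:
                u = stack.pop()
                if u == "t":
                    return 1.0
                for w in adj.get(u, []):
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            return 0.0

        # Variables z = (x_1..x_n, eps). Least-core LP: minimize eps subject to
        #   sum_{i in S} x_i >= v(S) - eps  for every proper nonempty coalition S,
        #   sum_i x_i = v(N).
        A_ub, b_ub = [], []
        for r in range(1, n):
            for S in combinations(range(n), r):
                A_ub.append([-1.0 if i in S else 0.0 for i in range(n)] + [-1.0])
                b_ub.append(-value(S))
        res = linprog(c=[0.0] * n + [1.0], A_ub=A_ub, b_ub=b_ub,
                      A_eq=[[1.0] * n + [0.0]], b_eq=[value(range(n))],
                      bounds=[(None, None)] * (n + 1))
        print("least-core eps:", res.x[n], "allocation:", res.x[:n])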

    Covering Problems for Partial Words and for Indeterminate Strings

    We consider the problem of computing a shortest solid cover of an indeterminate string. An indeterminate string may contain non-solid symbols, each of which specifies a subset of the alphabet that could be present at the corresponding position. We also consider covering partial words, which are a special case of indeterminate strings where each non-solid symbol is a don't care symbol. We prove that the indeterminate string covering problem and the partial word covering problem are NP-complete for a binary alphabet, and show that both problems are fixed-parameter tractable with respect to k, the number of non-solid symbols. For the indeterminate string covering problem we obtain a 2^{O(k log k)} + n k^{O(1)}-time algorithm. For the partial word covering problem we obtain a 2^{O(sqrt(k) log k)} + n k^{O(1)}-time algorithm. We prove that, unless the Exponential Time Hypothesis is false, no 2^{o(sqrt(k))} n^{O(1)}-time solution exists for either problem, which shows that our algorithm for the partial word case is close to optimal. We also present an algorithm for both problems which is feasible in practice. Comment: full version (simplified and corrected); preliminary version appeared at ISAAC 2014; 14 pages, 4 figures.
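
    The notion of "cover" used here is the standard one: a solid string covers a text if every position of the text lies inside at least one occurrence of that string. The naive Python check below pins down this definition for partial words, with '?' as the don't-care symbol; it is quadratic and purely illustrative, unlike the algorithms in the paper.

        def covers(cov: str, text: str) -> bool:
            """True iff every position of `text` lies inside an occurrence of `cov`,
            where an occurrence may align the don't-care symbol '?' with any letter."""
            n, ell = len(text), len(cov)
            covered = [False] * n
            for i in range(n - ell + 1):
                if all(text[i + j] == "?" or text[i + j] == cov[j] for j in range(ell)):
                    for j in range(i, i + ell):
                        covered[j] = True
            return all(covered)

        assert covers("aba", "ab?bab?")       # occurrences at 0, 2 and 4 cover every position
        assert not covers("aa", "a?b")        # position of 'b' is never covered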

    The chaining lemma and its application

    We present a new information-theoretic result which we call the Chaining Lemma. It considers a so-called “chain” of random variables, defined by a source distribution X_0 with high min-entropy and a number (say t in total) of arbitrary functions T_1, ..., T_t which are applied in succession to that source to generate the chain X_1 = T_1(X_0), X_2 = T_2(X_1), ..., X_t = T_t(X_{t-1}). Intuitively, the Chaining Lemma guarantees that, if the chain is not too long, then either (i) the entire chain is “highly random”, in that every variable has high min-entropy; or (ii) it is possible to find a point j (1 <= j <= t) in the chain such that, conditioned on the end of the chain (the variables from point j onwards), the preceding part of the chain remains highly random. We think this is an interesting information-theoretic result which is intuitive but nevertheless requires rigorous case analysis to prove. We believe that the above lemma will find applications in cryptography, and we give an example: we show an application of the lemma to protect essentially any cryptographic scheme against memory tampering attacks. We allow several tampering requests; the tampering functions can be arbitrary, but they must be chosen from a bounded-size set of functions that is fixed a priori.
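
    As a small numeric illustration of the objects involved (a high min-entropy source, functions applied in succession, and the min-entropy of each link), here is a toy Python sketch; the distribution and the functions are made up, and the sketch does not reproduce the lemma's conditional statement.

        # Min-entropy along a toy "chain": X_0 uniform on 4 bits, then three functions
        # applied in succession. H_inf(X) = -log2 of the largest probability mass.
        from collections import Counter
        from math import log2

        def min_entropy(dist):
            """dist: dict value -> probability."""
            return -log2(max(dist.values()))

        def push_forward(dist, f):
            """Distribution of f(X) when X ~ dist (deterministic maps only lose entropy)."""
            out = Counter()
            for x, p in dist.items():
                out[f(x)] += p
            return dict(out)

        X0 = {x: 1 / 16 for x in range(16)}          # uniform 4-bit source, H_inf = 4
        chain = [X0]
        for T in (lambda x: x // 2, lambda x: x % 4, lambda x: x % 2):
            chain.append(push_forward(chain[-1], T))

        print([round(min_entropy(d), 2) for d in chain])   # [4.0, 3.0, 2.0, 1.0]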

    Sequential decision making with vector outcomes

    We study a multi-round optimization setting in which, in each round, a player may select one of several actions, and each action produces an outcome vector that is not observable to the player until the round ends. The final payoff for the player is computed by applying a known function f to the sum of all outcome vectors (e.g., the minimum over all coordinates of the sum). We show that the standard performance measures used in the related expert and bandit settings (in which the per-round payoff is a scalar), such as comparison to the best single action, are not useful in our vector setting. Instead, we propose a different performance measure and design algorithms that have vanishing regret with respect to this new measure.
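
    The interaction protocol described above is easy to pin down in a few lines; the sketch below uses f = minimum over coordinates, one of the examples mentioned, with a placeholder uniform-random player and a fixed outcome table. Both are our assumptions, not the paper's algorithm or its model of the environment.

        # Toy version of the vector-outcome protocol: per round, one action is chosen,
        # its outcome vector is added to a running sum, and the payoff is f(sum) = min coordinate.
        import numpy as np

        rng = np.random.default_rng(1)
        T, n_actions, dim = 100, 3, 2
        outcomes = rng.uniform(0.0, 1.0, size=(n_actions, dim))   # one outcome vector per action

        total = np.zeros(dim)
        for t in range(T):
            a = rng.integers(n_actions)        # placeholder policy: uniform at random
            total += outcomes[a]               # vector outcome, revealed only after the round

        payoff = total.min()                   # final payoff f(sum of outcome vectors)
        print(payoff)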

    Projective re-normalization for improving the behavior of a homogeneous conic linear system

    In this paper we study the homogeneous conic system F: Ax = 0, x ∈ C \ {0}. We choose a point s̄ ∈ int C* that serves as a normalizer and consider computational properties of the normalized system F_s̄: Ax = 0, s̄^T x = 1, x ∈ C. We show that the computational complexity of solving F via an interior-point method depends only on the complexity value ϑ of the barrier for C and on the symmetry of the origin in the image set H_s̄ := {Ax : s̄^T x = 1, x ∈ C}, where the symmetry of 0 in H_s̄ is sym(0, H_s̄) := max{α ≥ 0 : −αy ∈ H_s̄ for all y ∈ H_s̄}. We show that a solution of F can be computed in O(sqrt(ϑ) ln(ϑ/sym(0, H_s̄))) interior-point iterations. In order to improve the theoretical and practical computation of a solution of F, we next present a general theory for projective re-normalization of the feasible region F_s̄ and the image set H_s̄, and we prove the existence of a normalizer s̄ such that sym(0, H_s̄) ≥ 1/m provided that F has an interior solution. We then develop a methodology for constructing a normalizer s̄ such that sym(0, H_s̄) ≥ 1/m with high probability, based on sampling from a geometric random walk, together with an associated probabilistic complexity analysis. While such a normalizer is not itself computable in strongly polynomial time, it yields a conic system that is solvable in O(sqrt(ϑ) ln(mϑ)) iterations, which is strongly polynomial time. Finally, we implement this methodology on randomly generated homogeneous linear programming feasibility problems constructed to be poorly behaved. Our computational results indicate that the projective re-normalization methodology holds promise for markedly reducing the overall computation time for conic feasibility problems; for instance, we observe a 46% decrease in average IPM iterations over 100 randomly generated poorly-behaved problem instances of dimension 1000 × 5000. (Singapore-MIT Alliance)
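
    The symmetry measure sym(0, H) is the quantity driving the iteration bounds above. For a polytope H = {y : Gy ≤ b} with b > 0 (so the origin is inside), it satisfies sym(0, H) = min_i b_i / max_{y ∈ H} (−g_i^T y), each inner maximization being a small LP; the sketch below evaluates this on a made-up two-dimensional example using scipy. Everything here is our own illustration, not the paper's methodology.

        # Symmetry of the origin in a polytope H = {y : G y <= b}, via one LP per constraint.
        import numpy as np
        from scipy.optimize import linprog

        G = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
        b = np.array([2.0, 1.0, 1.0, 1.0])          # the box [-1, 2] x [-1, 1]; origin inside

        def sym_at_origin(G, b):
            ratios = []
            for gi, bi in zip(G, b):
                # max_{y in H} (-gi . y)  ==  -min_{y in H} gi . y
                res = linprog(gi, A_ub=G, b_ub=b, bounds=[(None, None)] * G.shape[1])
                hi = -res.fun
                ratios.append(np.inf if hi <= 0 else bi / hi)
            return min(ratios)

        print(sym_at_origin(G, b))                  # 0.5 for the box [-1, 2] x [-1, 1]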